
    Streaming Algorithm for Euler Characteristic Curves of Multidimensional Images

    We present an efficient algorithm to compute Euler characteristic curves of grayscale images of arbitrary dimension. In various applications the Euler characteristic curve is used as a descriptor of an image. Our algorithm is the first streaming algorithm for Euler characteristic curves; streaming removes the need to store the entire image in RAM. Experiments show that our implementation handles terabyte-scale images on commodity hardware. Due to lock-free parallelism, it scales well with the number of processor cores. Our software, CHUNKYEuler, is available as open source on Bitbucket. Additionally, we put the concept of the Euler characteristic curve in the wider context of computational topology. In particular, we explain the connection with persistence diagrams.
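
    As a concrete illustration of what an Euler characteristic curve is, the sketch below computes it for a small 2D grayscale image by treating pixels as 2-cells of a cubical complex: a vertex or edge enters the sublevel set as soon as any incident pixel does, and chi(t) is the signed count of cells born at or below t. This is a minimal, non-streaming sketch for intuition only, not the CHUNKYEuler implementation.

```python
import numpy as np

def euler_characteristic_curve(img):
    """Euler characteristic of the sublevel sets {pixels <= t} of a 2D image.

    Illustrative sketch: pixels are 2-cells; a vertex/edge is born at the
    minimum value over its incident pixels; chi(t) is the cumulative sum of
    (-1)^dim over cells born at or below t.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # Pad with +inf so border cells take their birth value from real pixels only.
    p = np.pad(img, 1, constant_values=np.inf)

    contributions = {}  # birth value -> summed (-1)^dim contribution

    def add(birth, sign):
        if np.isfinite(birth):
            contributions[birth] = contributions.get(birth, 0) + sign

    # 2-cells (pixels): +1 each, born at their own gray value.
    for v in img.ravel():
        add(v, +1)
    # Vertical edges (between horizontally adjacent pixels): -1 each.
    for i in range(1, h + 1):
        for j in range(1, w + 2):
            add(min(p[i, j - 1], p[i, j]), -1)
    # Horizontal edges (between vertically adjacent pixels): -1 each.
    for i in range(1, h + 2):
        for j in range(1, w + 1):
            add(min(p[i - 1, j], p[i, j]), -1)
    # Vertices (shared by up to four pixels): +1 each.
    for i in range(1, h + 2):
        for j in range(1, w + 2):
            add(min(p[i - 1, j - 1], p[i - 1, j], p[i, j - 1], p[i, j]), +1)

    thresholds = sorted(contributions)
    curve = np.cumsum([contributions[t] for t in thresholds])
    return thresholds, curve
```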

    Making decisions about saving energy in compressed air systems using Ambient Intelligence and Artificial Intelligence

    Compressed air systems are often among the most expensive and inefficient industrial systems: for every 10 units of energy supplied, less than 1 unit turns into useful compressed air. Air compressors tend to be kept fully on even if they are not (all) needed. The research proposed in this short paper will combine real-time ambient sensing with Artificial Intelligence and Knowledge Management to automatically improve efficiency in energy-intensive manufacturing. The research will minimise energy use for air compressors based on real-time manufacturing conditions (and anticipated future requirements). Ambient data will provide detailed information on performance, Artificial Intelligence will make sense of that data and act automatically, and Knowledge Management will facilitate the processing of information to advise human operators on actions that reduce energy use while maintaining productivity. The aim is to create new intelligent techniques to save energy in compressed air systems.

    Big Data Fusion Model for Heterogeneous Financial Market Data (FinDF)

    The dawn of big data has seen the volume, variety, and velocity of data sources increase dramatically. Enormous amounts of structured, semi-structured and unstructured heterogeneous data can be garnered at a rapid rate, making analysis of such big data a herculean task. This is especially true for data relating to financial stock markets, where the biggest challenge is the 7 Vs of big data, which relate to the collection, pre-processing, storage and real-time processing of such huge quantities of disparate data sources. Data fusion techniques have been adopted in a wide range of fields to cope with vast amounts of heterogeneous data from multiple sources and to fuse them into a more comprehensive view of the data and its underlying relationships. Research into the fusion of heterogeneous financial data is scant in the literature, with existing work considering only the fusion of text-based financial documents. The lack of integration between financial stock market data, social media comments, financial discussion board posts and broker agencies means that the benefits of data fusion are not being realised to their full potential. This paper proposes a novel data fusion model, inspired by the data fusion model introduced by the Joint Directors of Laboratories, for fusing disparate data sources relating to financial stocks. Data with diverse sets of features from different sources will supplement each other to form a Smart Data Layer, which will assist in scenarios such as irregularity detection and stock price prediction.
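
    As a toy illustration of the kind of fusion step such a model performs, the sketch below aligns a structured source (daily stock quotes) with an unstructured one (per-post sentiment scores) on a shared (date, ticker) key. All field names and values are hypothetical; this is not the FinDF model itself.

```python
import pandas as pd

# Structured source: end-of-day stock quotes (toy data).
quotes = pd.DataFrame({
    "date": pd.to_datetime(["2021-03-01", "2021-03-02", "2021-03-03"]),
    "ticker": ["ACME", "ACME", "ACME"],
    "close": [101.2, 99.8, 103.5],
})

# Unstructured source: per-post sentiment scores from social media / forums (toy data).
posts = pd.DataFrame({
    "date": pd.to_datetime(["2021-03-01", "2021-03-01", "2021-03-03"]),
    "ticker": ["ACME", "ACME", "ACME"],
    "sentiment": [0.6, -0.2, 0.9],
})

# Fuse: aggregate the soft source to the granularity of the hard source,
# then join on (date, ticker) to obtain a single enriched record per day.
daily_sentiment = (posts.groupby(["date", "ticker"], as_index=False)
                        .agg(mean_sentiment=("sentiment", "mean"),
                             n_posts=("sentiment", "size")))
fused = quotes.merge(daily_sentiment, on=["date", "ticker"], how="left")
print(fused)
```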

    A distributed sensor network for video surveillance of outdoor environments

    A distributed sensor network (DSN) for video surveillance is presented here. The system is able to manage heterogeneous sensors (e.g. optical, infrared, radar) and to operate during night and day and in the presence of different weather conditions (e.g. fog, rain). Data fusion is therefore mandatory and is exploited at different levels to integrate the information produced by the sensors. The architecture presented has low network requirements, is easily scalable and maintainable, and allows the system to be distributed easily over a wide outdoor area.
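
    As a minimal sketch of one fusion step a node in such a network might perform, the code below combines independent position estimates of the same target from heterogeneous sensors by inverse-variance weighting. The sensor names and numbers are hypothetical, and this is not the architecture described in the paper.

```python
import numpy as np

# Each sensor reports an (x, y) position estimate and its error variance.
reports = [
    {"sensor": "optical",  "pos": np.array([12.1, 48.3]), "var": 0.5},
    {"sensor": "infrared", "pos": np.array([12.4, 48.0]), "var": 1.0},
    {"sensor": "radar",    "pos": np.array([11.8, 48.6]), "var": 2.0},
]

# Inverse-variance weighting: more reliable sensors get larger weights.
weights = np.array([1.0 / r["var"] for r in reports])
positions = np.stack([r["pos"] for r in reports])

fused_pos = (weights[:, None] * positions).sum(axis=0) / weights.sum()
fused_var = 1.0 / weights.sum()  # variance of the fused estimate

print("fused position:", fused_pos, "fused variance:", fused_var)
```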

    Real-time thresholding with Euler numbers

    The problem of finding an automatic thresholding technique is well known in applications involving image differencing, such as visual-based surveillance systems and autonomous vehicle driving. Among the algorithms proposed in past years, the thresholding technique based on the stable Euler number method is considered one of the most promising in terms of visual results. Unfortunately, its high computational complexity has made it unsuitable for real-time applications. The implementation proposed here, called fast Euler numbers, overcomes this problem by calculating all the Euler numbers in a single raster scan of the image; that is, it runs in O(hw), where h and w are the image's height and width, respectively. A technique for determining the optimal threshold, called zero crossing, is also proposed.
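
    The idea of obtaining the Euler numbers of all binarisations in a single raster scan can be sketched as follows, using Gray's bit-quad formula for 4-connectivity (4E = #Q1 - #Q3 + 2#QD): each 2x2 window contributes a piecewise-constant amount as the threshold varies, and those contributions can be accumulated into a difference array in one pass over the image. This is an illustrative reconstruction under standard assumptions, not the authors' fast Euler numbers implementation.

```python
import numpy as np

def euler_numbers_all_thresholds(img, levels=256):
    """Euler number (4-connectivity) of the binarisation {pixel > t}
    for every integer threshold t in [0, levels), in one raster scan
    over 2x2 windows."""
    img = np.asarray(img, dtype=int)
    # Pad with 0: a 0-valued pixel is background for every threshold t >= 0.
    p = np.pad(img, 1, constant_values=0)
    h, w = p.shape
    diff = np.zeros(levels + 1, dtype=int)  # difference array over thresholds

    def add(lo, hi, c):
        """Add contribution c for all thresholds t with lo <= t < hi."""
        lo, hi = max(lo, 0), min(hi, levels)
        if lo < hi and c != 0:
            diff[lo] += c
            diff[hi] -= c

    for i in range(h - 1):
        for j in range(w - 1):
            # Quad positions: 0=(i,j), 1=(i,j+1), 2=(i+1,j), 3=(i+1,j+1).
            quad = [(p[i, j], 0), (p[i, j + 1], 1),
                    (p[i + 1, j], 2), (p[i + 1, j + 1], 3)]
            quad.sort(reverse=True)                 # descending by value
            (v1, p1), (v2, p2), (v3, _), (v4, _) = quad
            add(v2, v1, +1)                         # exactly one pixel on: Q1
            add(v4, v3, -1)                         # exactly three pixels on: Q3
            if {p1, p2} in ({0, 3}, {1, 2}):        # top two on a diagonal: QD
                add(v3, v2, +2)

    return np.cumsum(diff[:levels]) // 4            # E(t) for t = 0..levels-1
```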

    New Trends For Enhancing Maritime Situational Awareness

    Maritime security depends on the capability to build “a comprehensive awareness of maritime activity, which encompasses territorial and international waters, and to act accordingly” [1]. This level of awareness can be reached in several steps, evolving from single-sensor into multi-sensor, multi-cue systems, exploiting heterogeneous information extracted from different (hard and soft) sources, and integrating contextual information with the purpose of generating a faithful and timely comprehensive maritime picture. Heterogeneity and context play a crucial role at different levels of fusion, allowing the combination of complementary and multifaceted information, which should be shared among national and international actors. In this paper we discuss why these global trends must be adopted and expanded to build analytical models that can effectively face asymmetric threats.

    Fusing contextual word embeddings for concreteness estimation

    Natural Language Processing (NLP) has a long history, and recent research has focused in particular on encoding meaning in a computable way. Word embeddings have been used for this purpose, allowing language tasks to be treated as mathematical problems. Real-valued vectors have been generated or employed as word representations for several NLP tasks. In this work, different types of pre-trained word embeddings are fused together to estimate word concreteness. In evaluating this task, we take into account how much contextual information affects the final results and how to fuse different word embeddings properly in order to maximize their performance. The best architecture in our study surpasses the winning solution of the Evalita 2020 word concreteness task.
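
    A minimal sketch of the fusion-by-concatenation idea: word vectors from two different pre-trained embeddings are concatenated and fed to a simple regressor trained on concreteness ratings. The tiny vectors and ratings below are toy stand-ins for real pre-trained embeddings and human-annotated data; this is not the architecture evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Two stand-in "pre-trained" embedding tables (e.g. one static, one contextual).
emb_a = {"stone": np.array([0.9, 0.1, 0.0]), "idea": np.array([0.1, 0.8, 0.3]),
         "table": np.array([0.7, 0.2, 0.1]), "truth": np.array([0.2, 0.9, 0.4])}
emb_b = {"stone": np.array([0.8, 0.2]), "idea": np.array([0.2, 0.7]),
         "table": np.array([0.9, 0.1]), "truth": np.array([0.1, 0.8])}

def fuse(word):
    # Fusion by concatenation; averaging or weighted sums are alternatives.
    return np.concatenate([emb_a[word], emb_b[word]])

# Toy human concreteness ratings on a 1-5 scale.
train = {"stone": 4.9, "idea": 1.6, "table": 4.8}
X = np.stack([fuse(w) for w in train])
y = np.array(list(train.values()))

model = Ridge(alpha=1.0).fit(X, y)
print("predicted concreteness of 'truth':", model.predict([fuse("truth")])[0])
```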